
    TiGL - An Open Source Computational Geometry Library for Parametric Aircraft Design

    This paper introduces the software TiGL: TiGL is an open-source, high-fidelity geometry modeler used in the conceptual and preliminary design phases of aircraft and helicopters. It creates full three-dimensional models of aircraft from their parametric CPACS description. Due to its parametric nature, it is typically used for aircraft design analysis and optimization. First, we present the use case and architecture of TiGL. Then, we discuss its geometry module, which generates the B-spline-based surfaces of the aircraft. The backbone of TiGL is its surface generator for curve network interpolation, based on Gordon surfaces. One major part of this paper explains the mathematical foundation of Gordon surfaces on B-splines and how we achieve the required curve network compatibility. Finally, we introduce TiGL's aircraft component module, which is used to create the external and internal parts of aircraft, such as wings, flaps, fuselages, engines, or structural elements.
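    For context, the Gordon construction referenced above is classical: the surface is the Boolean sum of a surface skinned through the profile curves and a surface skinned through the guide curves, minus a tensor-product surface through their intersection points. In generic textbook notation (not necessarily the paper's), with m profile curves f_i(u), n guide curves g_j(v), and intersection points p_ij = f_i(u_j) = g_j(v_i):

        S(u,v) = \sum_{i=1}^{m} f_i(u)\, L_i(v)
               + \sum_{j=1}^{n} g_j(v)\, M_j(u)
               - \sum_{i=1}^{m}\sum_{j=1}^{n} p_{ij}\, L_i(v)\, M_j(u)

    The blending functions satisfy L_i(v_k) = \delta_{ik} and M_j(u_k) = \delta_{jk}, so S reproduces every curve of the network. Realizing all three terms as B-spline surfaces requires common intersection parameters and compatible degrees and knot vectors, which is the curve network compatibility the abstract refers to.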

    Concepts for the efficient Monte Carlo-based treatment plan optimization in radiotherapy

    Monte Carlo (MC) dose calculation algorithms are regarded as the gold standard in intensity-modulated radiation therapy (IMRT). Simply adding an MC dose calculation engine to a standard IMRT optimization framework is possible but computationally inefficient; the optimization would be too time-consuming for clinical practice. In this work we developed a hybrid algorithm for treatment plan optimization that combines the accuracy of MC simulations with the efficiency of less precise dose calculation algorithms. Two methods are introduced that allow a rapid convergence of the iterative optimization algorithm and preserve the efficiency of the MC dose calculation. The performance of the hybrid optimization algorithm is analyzed on different treatment sites. The results are compared against a reference optimization algorithm, which is based on MC simulations in the standard IMRT framework. For this comparison we evaluated several indicators of treatment plan quality, convergence properties, calculation times, and efficiency ratios. The efficiency of the optimization could be improved from originally 10-30% to 80-95%. Due to this improvement, the calculation times could be reduced to 2-28 minutes, depending on the treatment plan complexity. At the same time, the treatment plan quality was maintained compared to the reference algorithm.
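    The abstract does not spell out the hybrid scheme; a common pattern for combining a fast dose engine with MC is a multiplicative correction loop, sketched below in Python. All names here are hypothetical illustration, not the paper's algorithm: inner iterations use the cheap engine, and each outer iteration recomputes the dose with MC and folds the MC-to-approximate ratio back into the optimization.

        import numpy as np

        def hybrid_optimize(fluence, approx_dose, mc_dose, gradient,
                            n_outer=5, n_inner=50, step=0.1):
            """Hypothetical correction-driven hybrid loop (illustrative only).

            approx_dose/mc_dose map a fluence vector to a voxel dose vector;
            gradient maps (dose, fluence) to the objective gradient."""
            for _ in range(n_outer):
                d_mc = mc_dose(fluence)        # expensive, accurate
                d_ap = approx_dose(fluence)    # cheap, less precise
                # voxel-wise correction factor; 1 where the fast dose is zero
                corr = np.divide(d_mc, d_ap, out=np.ones_like(d_ap), where=d_ap > 0)
                for _ in range(n_inner):       # cheap inner optimization steps
                    d = corr * approx_dose(fluence)
                    fluence = np.maximum(fluence - step * gradient(d, fluence), 0.0)
            return fluence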

    Global parameterization of trimmed NURBS based CAD geometries for mesh deformation

    In this talk we present a novel global parameterization scheme for points on a CAD geometry. The algorithm can be used to compute mesh deformations for changes of the underlying geometry. The creation of structured meshes is a time-consuming trial-and-error process, which is not suitable for, e.g., automatic optimization. Gradient-based optimization in particular often performs only small changes of the design variables, which should result in only slightly different meshes. Therefore, methods are required that deform an initial mesh based on the change of the initial geometry. Here, we present a projection method that computes a bijective mapping between a point in space and its global parameterization with respect to the trimmed-NURBS-based CAD geometry. After a geometry change, the parameterized points can be back-projected into 3D space, which eventually yields the deformed mesh. Providing support for trimmed NURBS geometries is particularly challenging, as their surface parameters (u, v) might be valid only in a non-rectangular trimming region. This region, however, varies with geometry changes, which would lead to a loss of mesh points if not properly handled. To overcome this issue, we reparameterize the trimming region such that the domain of new parameters (u0, v0) is rectangular.

    Our projection algorithm is separated into three subproblems: first, finding the face a mesh point belongs to; second, reparameterizing the face to get a bijective mapping; third, projecting the point onto the reparameterized surface. The first and third problems are comparatively simple and can be solved using standard CAD algorithms. For the reparameterization problem, we provide a method that converts the 2D trimming domain of the NURBS into a series of two-dimensional untrimmed patches. This is done by first subdividing the original NURBS face into multiple faces. Then, we identify or create four boundary curves for each of these sub-faces. The four boundary curves are finally used to create a reparameterization patch, e.g. using the Coons method. The projection of a point leads to a unique solution only if the reparameterization patch is invertible. We check the invertibility of the patch by separating it into rational Bézier spline surfaces and checking that their Jacobian determinants are larger than zero. This strategy admits a large range of face types, including faces with holes and faces with more or fewer than four boundary curves. The back-projection method is analogous and also requires the creation of the reparameterization surfaces.

    The algorithm is implemented in a C++ library, which utilizes the CAD functionality of the Open CASCADE framework. The library is designed to be used on computing clusters by providing functions for the serialization and deserialization of the geometry. This enables the parallelized projection and back-projection of large computational meshes with millions of points to reduce the runtime. Since the algorithm only works for small geometry changes, i.e. for geometries with the same topology, we also added functions to compare and store the topology of two CAD objects. Our method is currently used within a DLR-internal project to enable a large-scale gradient-based optimization of an aircraft. In our workflow, the software TiGL converts the parametric aircraft description into a CAD representation. The initial structured mesh is created with a commercial mesh generator. In the subsequent iterations of the optimization, the mesh is deformed using the presented method. Details on the robustness of the algorithm and its computational performance will be presented in the talk.
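    As a concrete illustration of the reparameterization and invertibility check, the following self-contained Python sketch builds a bilinearly blended Coons patch over four boundary curves in the 2D trimming domain and verifies that its Jacobian determinant is positive. It is a hedged stand-in: the actual library works on rational Bézier decompositions of the patch and on Open CASCADE geometry, not on the numerical grid test used here.

        import numpy as np

        def coons_patch(c0, c1, d0, d1):
            """Bilinearly blended Coons patch mapping [0,1]^2 into the 2D
            trimming domain. Boundary curves are callables returning 2D
            points; corners must match (c0(0) = d0(0), etc.)."""
            P00, P10 = np.asarray(c0(0.0)), np.asarray(c0(1.0))
            P01, P11 = np.asarray(c1(0.0)), np.asarray(c1(1.0))
            def S(u, v):
                ruled_u = (1 - v) * np.asarray(c0(u)) + v * np.asarray(c1(u))
                ruled_v = (1 - u) * np.asarray(d0(v)) + u * np.asarray(d1(v))
                bilin = ((1 - u) * (1 - v) * P00 + u * (1 - v) * P10
                         + (1 - u) * v * P01 + u * v * P11)
                return ruled_u + ruled_v - bilin
            return S

        def jacobian_positive(S, n=32, h=1e-6):
            """Numerical stand-in for the invertibility test (the paper uses
            rational Bezier decomposition instead): sample det(dS/d(u,v)) on
            a grid and require it to be positive everywhere."""
            for u in np.linspace(h, 1 - h, n):
                for v in np.linspace(h, 1 - h, n):
                    Su = (S(u + h, v) - S(u - h, v)) / (2 * h)  # d/du column
                    Sv = (S(u, v + h) - S(u, v - h)) / (2 * h)  # d/dv column
                    if Su[0] * Sv[1] - Su[1] * Sv[0] <= 0:
                        return False
            return True

        # Toy example: a curved quadrilateral region in the trimming domain.
        S = coons_patch(
            c0=lambda u: (u, 0.1 * np.sin(np.pi * u)),  # bottom boundary
            c1=lambda u: (u, 1.0),                      # top boundary
            d0=lambda v: (0.0, v),                      # left boundary
            d1=lambda v: (1.0, v),                      # right boundary
        )
        print(jacobian_positive(S))  # True: this reparameterization is invertible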

    A Generic Parametric Modeling Engine Targeted Towards Multidisciplinary Design: Goals and Concepts

    This paper presents the design concept of a generic parametric modeling engine that is completely decoupled from geometry generation. Driven by requirements extracted from preliminary multidisciplinary airplane design, the presented software architecture provides a platform that enables the interplay of different modeling and simulation tools on the one hand, and their efficient execution in a parametric tree on the other. An integrated plugin system allows users to define custom plugins exposing arbitrary types and functions. All geometric functionality is provided via plugins, decoupling it entirely from the parametric engine. First, we specify the goals that the software framework needs to fulfill, elaborating on the requirements encountered in early aircraft design. Then, we describe the software architecture and its modules, realized as a C++ library. As such, the software is a back-end that can be used by third-party developers to create user-friendly and interoperable tools. The core of the framework is a parametric engine called grunk with an integrated plugin system and serialization functionality. The key feature of grunk is that users can define custom types in plugins and use them in the parametric tree. Geometric modeling functionality is provided through the plugins grocc and geo, the former integrating OpenCascade Technology's functionality and the latter extending it. A major feature on the geometry side is the provision of derivatives through algorithmic differentiation, making the framework particularly suitable for gradient-based optimization applications. Finally, we demonstrate the use of the software via examples and show the results.
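    The abstract describes the engine only abstractly; the following minimal Python sketch (hypothetical, not grunk's C++ API) illustrates the core idea of a parametric tree with dirty propagation: changing a parameter invalidates only downstream nodes, so the functions contributed by plugins are re-executed only where needed.

        class Param:
            """Hypothetical leaf parameter; setting it marks downstream nodes dirty."""
            def __init__(self, value):
                self.value_, self.dependents = value, []
            def set(self, value):
                self.value_ = value
                stack = list(self.dependents)
                while stack:                      # propagate dirtiness downstream
                    n = stack.pop()
                    if not n.dirty:
                        n.dirty = True
                        stack.extend(n.dependents)
            def value(self):
                return self.value_

        class Node:
            """Hypothetical derived node: recomputes via func only when dirty."""
            def __init__(self, func, *inputs):
                self.func, self.inputs = func, inputs
                self.dependents, self.dirty, self.cache = [], True, None
                for n in inputs:
                    n.dependents.append(self)
            def value(self):
                if self.dirty:
                    self.cache = self.func(*(n.value() for n in self.inputs))
                    self.dirty = False
                return self.cache

        # Usage: only nodes downstream of a changed parameter are re-evaluated.
        span = Param(30.0)
        area = Node(lambda s: s * 4.5, span)  # a plugin could register such functions
        print(area.value())  # computes
        span.set(32.0)
        print(area.value())  # recomputes only this branch

    Per the abstract, the real engine additionally supports user-defined types contributed by plugins and serialization of the tree.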

    Accelerating Neural Network Training with Distributed Asynchronous and Selective Optimization (DASO)

    With increasing data and model complexities, the time required to train neural networks has become prohibitively large. To address the exponential rise in training time, users are turning to data parallel neural networks (DPNN) and large-scale distributed resources on computer clusters. Current DPNN approaches implement the network parameter updates by synchronizing and averaging gradients across all processes with blocking communication operations after each forward-backward pass. This synchronization is the central algorithmic bottleneck. We introduce the Distributed Asynchronous and Selective Optimization (DASO) method, which leverages multi-GPU compute node architectures to accelerate network training while maintaining accuracy. DASO uses a hierarchical and asynchronous communication scheme comprising node-local and global networks, while adjusting the global synchronization rate during the learning process. We show that DASO yields a reduction in training time of up to 34% on classical and state-of-the-art networks, as compared to current optimized data parallel training methods.
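    The hierarchical part of the scheme can be sketched with standard PyTorch distributed primitives. The snippet below is a simplified, synchronous rendition for illustration: gradients are averaged inside a node every step and across nodes only every few steps; DASO's asynchronous overlap and automatic adjustment of the global synchronization rate are omitted. The process groups would be created with dist.new_group, one per node plus one spanning the nodes.

        import torch.distributed as dist

        def hierarchical_sync(model, local_group, global_group, step, global_every=4):
            """Average gradients node-locally every step, globally only every
            `global_every` steps (simplified sketch, not DASO itself)."""
            for p in model.parameters():
                if p.grad is None:
                    continue
                # fast intra-node averaging over the GPUs of one node
                dist.all_reduce(p.grad, op=dist.ReduceOp.SUM, group=local_group)
                p.grad /= dist.get_world_size(local_group)
                if step % global_every == 0:  # infrequent blocking global traffic
                    dist.all_reduce(p.grad, op=dist.ReduceOp.SUM, group=global_group)
                    p.grad /= dist.get_world_size(global_group)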

    HeAT – a Distributed and GPU-accelerated Tensor Framework for Data Analytics

    In order to cope with the exponential growth in available data, the efficiency of data analysis and machine learning libraries has recently received increased attention. Although the corresponding array-based numerical kernels have been significantly improved, most are limited by the resources available on a single computational node. Consequently, kernels must exploit distributed resources, e.g. distributed memory architectures. To this end, we introduce HeAT, an array-based numerical programming framework for large-scale parallel processing with an easy-to-use NumPy-like API. HeAT utilizes PyTorch as a node-local eager execution engine and distributes the workload via MPI on arbitrarily large high-performance computing systems. It provides both low-level array-based computations and assorted higher-level algorithms. With HeAT, it is possible for a NumPy user to take advantage of their available resources, significantly lowering the barrier to distributed data analysis. Compared with applications written in similar frameworks, HeAT achieves speedups of up to two orders of magnitude.
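    HeAT is open source (https://github.com/helmholtz-analytics/heat), and its API mirrors NumPy closely; the main addition is the split attribute selecting the axis along which an array is distributed across MPI processes. A short usage example, written from the public API as we recall it (details may vary between versions):

        # run under MPI, e.g.: mpirun -np 4 python demo.py
        import heat as ht

        # 1-D array distributed along axis 0; each process holds one chunk
        x = ht.arange(1_000_000, split=0, dtype=ht.float32)
        y = ht.sin(x) * 2.0  # elementwise ops execute node-locally via PyTorch
        m = ht.mean(y)       # reductions communicate via MPI behind the scenes
        print(m)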
